We consider online imitation learning (OIL), where the task is to find a policy that imitates the behavior of an expert via active interaction with the environment. We aim to bridge the gap between the theory and practice of policy optimization algorithms for OIL by analyzing one of the most popular OIL algorithms, DAgger. Specifically, if the class of policies is sufficiently expressive to contain the expert policy, we prove that DAgger achieves constant regret. Unlike previous bounds that require the losses to be strongly convex, our result only requires the weaker assumption that the losses be strongly convex with respect to the policy's sufficient statistics (rather than its parameterization). In order to ensure convergence for a wider class of policies and losses, we augment DAgger with an additional regularization term. In particular, we propose a variant of Follow-the-Regularized-Leader (FTRL) and its adaptive variant for OIL, and develop a memory-efficient implementation that matches the memory requirements of FTL. Assuming the loss functions are smooth and convex with respect to the policy parameters, we also prove that FTRL achieves constant regret for any sufficiently expressive policy class, while retaining $O(\sqrt{T})$ regret in the worst case. We demonstrate the effectiveness of these algorithms via experiments on synthetic and high-dimensional control tasks.
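To make the contrast between the FTL-style DAgger update and its regularized variant concrete, the display below gives a schematic form of the two updates; the per-round imitation loss $\ell_s$, the policy class $\Pi$, and the regularizer $R$ are generic placeholders chosen for illustration, not the paper's exact notation.

```latex
% Schematic online updates (notation assumed, not taken verbatim from the paper).
\begin{align*}
\text{FTL / DAgger:} \quad & \pi_{t+1} \in \arg\min_{\pi \in \Pi} \; \sum_{s=1}^{t} \ell_s(\pi), \\
\text{FTRL:} \quad & \pi_{t+1} \in \arg\min_{\pi \in \Pi} \; \sum_{s=1}^{t} \ell_s(\pi) \;+\; \lambda\, R(\pi),
\end{align*}
```

where $\ell_s$ is the loss on the data collected at round $s$ and $\lambda > 0$ weights the added regularization.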
In contrast to the advances in characterizing the sample complexity of solving Markov decision processes (MDPs), the optimal statistical complexity of solving constrained MDPs (CMDPs) remains unknown. We resolve this question by providing minimax upper and lower bounds on the sample complexity of learning near-optimal policies in a discounted CMDP with access to a generative model (simulator). In particular, we design a model-based algorithm that addresses two settings: (i) relaxed feasibility, where small constraint violations are allowed, and (ii) strict feasibility, where the output policy is required to satisfy the constraint. For (i), we prove that our algorithm returns an $\epsilon$-optimal policy with probability $1-\delta$ by making $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1-\gamma)^3 \epsilon^2}\right)$ queries to the generative model, thus matching the sample complexity of unconstrained MDPs. For (ii), we show that the algorithm's sample complexity is upper-bounded by $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1-\gamma)^5 \epsilon^2 \zeta^2}\right)$, where $\zeta$ is the problem-dependent Slater constant that characterizes the size of the feasible region. Finally, we prove a matching lower bound for the strict feasibility setting, thus obtaining the first minimax optimal bounds for discounted CMDPs. Our results show that learning CMDPs is as easy as learning MDPs when small constraint violations are allowed, but inherently more difficult when zero constraint violation is required.
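Writing the two bounds from the abstract side by side (with $S$, $A$ the state and action space sizes, $\gamma$ the discount factor, and $\zeta$ the Slater constant) makes the cost of demanding zero constraint violation explicit:

```latex
\begin{align*}
\text{Relaxed feasibility:} \quad & \tilde{O}\!\left(\frac{S A \,\log(1/\delta)}{(1-\gamma)^3\,\epsilon^2}\right), \\
\text{Strict feasibility:} \quad & \tilde{O}\!\left(\frac{S A \,\log(1/\delta)}{(1-\gamma)^5\,\epsilon^2\,\zeta^2}\right).
\end{align*}
```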
Our goal is to make stochastic gradient descent (SGD) adaptive to (i) the noise $\sigma^2$ in the stochastic gradients and (ii) problem-dependent constants. When minimizing smooth, strongly convex functions with condition number $\kappa$, we prove that $T$ iterations of SGD with exponentially decreasing step sizes and knowledge of the smoothness can achieve an $\tilde{O}\left(\exp\left(\frac{-T}{\kappa}\right) + \frac{\sigma^2}{T}\right)$ rate without knowing $\sigma^2$. In order to be adaptive to the smoothness, we use a stochastic line search (SLS) and show (via upper and lower bounds) that SGD with SLS converges at the desired rate, but only to a neighborhood of the solution. On the other hand, we prove that SGD with an offline estimate of the smoothness converges to the minimizer; however, its rate is slowed down in proportion to the estimation error. Next, we prove that SGD with Nesterov acceleration and exponential step sizes (referred to as ASGD) can achieve the near-optimal $\tilde{O}\left(\exp\left(\frac{-T}{\sqrt{\kappa}}\right) + \frac{\sigma^2}{T}\right)$ rate without knowledge of $\sigma^2$. When used with offline estimates of the smoothness and strong convexity, ASGD still converges to the solution, albeit at a slower rate. We empirically demonstrate the effectiveness of exponential step sizes together with a novel variant of SLS.
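As a concrete illustration of an exponentially decreasing step-size schedule, the following is a minimal sketch on a noisy quadratic; the decay constant, the helper name, and the toy objective are assumptions for illustration, and the paper's line search and acceleration are not reproduced here.

```python
import numpy as np

def sgd_exponential_step(grad_fn, x0, T, eta0=1.0, alpha=None):
    """SGD with an exponentially decreasing step size eta_t = eta0 * alpha**t.

    One common choice (assumed here, not taken from the paper) is
    alpha = (beta / T) ** (1 / T) for some beta in (0, 1), so the step size
    decays from eta0 to roughly eta0 * beta / T over T iterations.
    """
    if alpha is None:
        beta = 0.1
        alpha = (beta / T) ** (1.0 / T)
    x = np.array(x0, dtype=float)
    for t in range(T):
        eta_t = eta0 * alpha ** t
        x = x - eta_t * grad_fn(x)   # grad_fn returns a stochastic gradient
    return x

# Usage on a noisy 1-d quadratic f(x) = 0.5 * x^2 with additive gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_final = sgd_exponential_step(noisy_grad, x0=np.array([5.0]), T=1000)
print(x_final)  # close to the minimizer 0, up to the noise level
```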
Common policy gradient methods rely on the maximization of a sequence of surrogate functions. In recent years, many such surrogate functions have been proposed, most without strong theoretical guarantees, leading to algorithms such as TRPO, PPO, or MPO. Rather than design yet another surrogate function, we propose a general framework (FMA-PG) based on functional mirror ascent that gives rise to an entire family of surrogate functions. We construct surrogate functions that enable policy improvement guarantees, a property not shared by most existing surrogate functions. Crucially, these guarantees hold regardless of the choice of policy parameterization. Moreover, a particular instantiation of FMA-PG recovers important implementation heuristics (e.g., using the forward vs. reverse KL divergence), resulting in a variant of TRPO with additional desirable properties. Via experiments on simple bandit problems, we evaluate the algorithms instantiated by FMA-PG. The proposed framework also suggests an improved variant of PPO, whose robustness and efficiency we demonstrate on the MuJoCo suite.
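Schematically, a mirror-ascent-style surrogate of this kind maximizes a first-order model of the return penalized by a divergence to the current policy; the display below is a generic form of such an update, written in notation assumed for illustration rather than the paper's exact objective.

```latex
% Generic mirror-ascent surrogate at iteration t (illustrative notation):
\theta_{t+1} \in \arg\max_{\theta} \;
  \mathbb{E}_{s,a \sim \pi_{\theta_t}}\!\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_t}(a \mid s)}\, A^{\pi_{\theta_t}}(s,a)\right]
  \;-\; \frac{1}{\eta}\, \mathbb{E}_{s \sim \pi_{\theta_t}}\!\left[ D_\Phi\!\big(\pi_\theta(\cdot \mid s),\, \pi_{\theta_t}(\cdot \mid s)\big)\right]
```

Different choices of the mirror map $\Phi$, and hence of the divergence $D_\Phi$ (e.g., forward vs. reverse KL), yield different members of the surrogate family.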
Variance reduction (VR) methods for finite-sum minimization typically require knowledge of problem-dependent constants that are often unknown and difficult to estimate. To address this, we use ideas from adaptive gradient methods to propose AdaSVRG, a more robust variant of SVRG, a common VR method. AdaSVRG uses AdaGrad in the inner loop of SVRG, making it robust to the choice of step size. When minimizing a sum of $n$ smooth convex functions, we prove that a variant of AdaSVRG requires $\tilde{O}(n + 1/\epsilon)$ gradient evaluations to achieve $O(\epsilon)$-suboptimality, matching the typical rate but without needing to know problem-dependent constants. Next, we leverage the properties of AdaGrad to propose a heuristic that adaptively determines the length of each inner loop in AdaSVRG. Via experiments on synthetic and real-world datasets, we validate the robustness and effectiveness of AdaSVRG, demonstrating its superior performance over standard and other "tune-free" VR methods.
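A minimal sketch of the SVRG-with-AdaGrad idea on a finite sum of smooth convex functions is given below; the function name, constants, scalar AdaGrad variant, and fixed inner-loop length are illustrative assumptions, and the paper's actual algorithm, iterate averaging, and adaptive inner-loop termination heuristic are not reproduced.

```python
import numpy as np

def adasvrg_sketch(grad_i, n, x0, outer_iters=20, inner_iters=None, eta=1.0, eps=1e-8):
    """SVRG whose inner loop uses AdaGrad-style (norm-based) step sizes.

    grad_i(x, i) returns the gradient of the i-th component function at x.
    """
    rng = np.random.default_rng(0)
    x_ref = np.array(x0, dtype=float)
    m = inner_iters if inner_iters is not None else n
    for _ in range(outer_iters):
        # Full gradient at the snapshot (reference) point.
        mu = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        x = x_ref.copy()
        accum = 0.0                                    # AdaGrad accumulator, restarted each inner loop
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_ref, i) + mu   # variance-reduced gradient
            accum += float(np.dot(g, g))
            x = x - (eta / np.sqrt(accum + eps)) * g   # AdaGrad-style step
        x_ref = x                                      # next snapshot
    return x_ref

# Usage: least squares, f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = adasvrg_sketch(grad_i, n=100, x0=np.zeros(5))
```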
Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose \textit{Differentiable Symbolic Expression Search} (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify the extent of this effect, we conduct a series of controlled experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these learnings onto the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
Multi-Task Learning (MTL) has proven important in user-facing products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not provide a solution if each task needs to be trained on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN helps train the model on multiple datasets simultaneously, where each branch of the tree may need a different training dataset. Our results show that TreeDNN provides competitive performance, with the advantages of a reduced ROM requirement for parameter storage and increased system responsiveness, since only the relevant branch is loaded at inference time.
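A minimal sketch of the tree-structured idea follows: a shared trunk feeds task-specific branches, each branch can be trained on its own dataset, and only the relevant branch needs to be loaded at inference time. The module names, layer sizes, and single-level trunk are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TreeDNNSketch(nn.Module):
    """Shared trunk + per-task branches; each branch may be trained on a different dataset."""

    def __init__(self, in_dim=128, trunk_dim=64, branch_out_dims=(10, 5)):
        super().__init__()
        # Trunk parameters are shared by every task, reducing total parameter storage.
        self.trunk = nn.Sequential(nn.Linear(in_dim, trunk_dim), nn.ReLU())
        # One lightweight head per task; at inference only the needed head must be loaded.
        self.branches = nn.ModuleList(
            [nn.Linear(trunk_dim, out_dim) for out_dim in branch_out_dims]
        )

    def forward(self, x, task_id):
        return self.branches[task_id](self.trunk(x))

# Usage: alternate mini-batches from each task's dataset during training, and call
# forward(x, task_id) with only the appropriate branch loaded at inference.
model = TreeDNNSketch()
x = torch.randn(4, 128)
logits_task0 = model(x, task_id=0)   # shape (4, 10)
logits_task1 = model(x, task_id=1)   # shape (4, 5)
```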
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment used to ablate intracranial structures for the treatment of mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are the state-of-the-art solution for ROI segmentation, but require large amounts of annotated data during training. However, collecting large datasets from an emerging treatment such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and the diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better exploit extra information and provide additional supervision during network training, we design a conditional embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of the masks into the feature space. Finally, a segmentation network is trained on raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method can achieve realistic synthetic results and improve the performance of downstream segmentation tasks beyond traditional data augmentation techniques.
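The following is a highly simplified sketch of the two-stage pipeline described above, in which one network produces a lesion mask and a second network synthesizes a lesion image conditioned on that mask. The layer choices, sizes, and the omission of the CEB/MEB conditioning blocks are simplifications for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskSynthesisNet(nn.Module):
    """Maps a latent code to a single-channel lesion mask (stage 1)."""
    def __init__(self, latent_dim=32, size=64):
        super().__init__()
        self.size = size
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, size * size),
            nn.Sigmoid(),                     # mask values in [0, 1]
        )

    def forward(self, z):
        return self.decode(z).view(-1, 1, self.size, self.size)

class LesionSynthesisNet(nn.Module):
    """Synthesizes a lesion image conditioned on the mask (stage 2)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, image, mask):
        # Concatenate the background image and the mask along the channel dimension.
        return self.net(torch.cat([image, mask], dim=1))

# Usage: sample a mask, then synthesize a lesion on top of a (dummy) background image.
mask_net, lesion_net = MaskSynthesisNet(), LesionSynthesisNet()
z = torch.randn(2, 32)
mask = mask_net(z)                        # (2, 1, 64, 64)
background = torch.randn(2, 1, 64, 64)
synthetic = lesion_net(background, mask)  # (2, 1, 64, 64)
```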